Results 1-20 of 1,882
1.
Neuroimage ; 292: 120604, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38604537

ABSTRACT

Despite its widespread use, resting-state functional magnetic resonance imaging (rsfMRI) has been criticized for low test-retest reliability. To improve reliability, researchers have recommended using extended scanning durations, increased sample size, and advanced brain connectivity techniques. However, longer scanning runs and larger sample sizes may come with practical challenges and burdens, especially in rare populations. Here we tested if an advanced brain connectivity technique, dynamic causal modeling (DCM), can improve reliability of fMRI effective connectivity (EC) metrics to acceptable levels without extremely long run durations or extremely large samples. Specifically, we employed DCM for EC analysis on rsfMRI data from the Human Connectome Project. To avoid bias, we assessed four distinct DCMs and gradually increased sample sizes in a randomized manner across ten permutations. We employed pseudo true positive and pseudo false positive rates to assess the efficacy of shorter run durations (3.6, 7.2, 10.8, 14.4 min) in replicating the outcomes of the longest scanning duration (28.8 min) when the sample size was fixed at the largest (n = 160 subjects). Similarly, we assessed the efficacy of smaller sample sizes (n = 10, 20, …, 150 subjects) in replicating the outcomes of the largest sample (n = 160 subjects) when the scanning duration was fixed at the longest (28.8 min). Our results revealed that the pseudo false positive rate was below 0.05 for all the analyses. After the scanning duration reached 10.8 min, which yielded a pseudo true positive rate of 92%, further extensions in run time showed no improvements in pseudo true positive rate. Expanding the sample size led to enhanced pseudo true positive rate outcomes, with a plateau at n = 70 subjects for the targeted top one-half of the largest ECs in the reference sample, regardless of whether the longest run duration (28.8 min) or the viable run duration (10.8 min) was employed. Encouragingly, smaller sample sizes exhibited pseudo true positive rates of approximately 80% for n = 20, and 90% for n = 40 subjects. These data suggest that advanced DCM analysis may be a viable option to attain reliable metrics of EC when larger sample sizes or run times are not feasible.
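A minimal sketch of the evaluation logic, as we read it (the definitions below are ours, not lifted from the paper): the longest-duration, largest-sample analysis supplies a pseudo ground truth, and a reduced analysis is scored against it.

```python
# Pseudo TPR/FPR of a reduced analysis scored against a reference analysis.
# Both analyses are assumed to yield a set of "significant" EC edges drawn
# from the same candidate connections.

def pseudo_rates(reference_sig, reduced_sig, all_edges):
    """Treat the reference (28.8 min, n = 160) analysis as pseudo truth."""
    ref_neg = all_edges - reference_sig
    ptpr = len(reference_sig & reduced_sig) / len(reference_sig)
    pfpr = len(ref_neg & reduced_sig) / len(ref_neg)
    return ptpr, pfpr

# Hypothetical example: 12 directed connections among 4 regions.
edges = {(i, j) for i in range(4) for j in range(4) if i != j}
reference = {(0, 1), (1, 2), (2, 3), (3, 0)}  # "significant" in reference
reduced = {(0, 1), (1, 2), (0, 2)}            # "significant" at 10.8 min
print(pseudo_rates(reference, reduced, edges))  # -> (0.5, 0.125)
```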

2.
Vet Med Sci ; 10(3): e1444, 2024 May.
Article in English | MEDLINE | ID: mdl-38581306

ABSTRACT

BACKGROUND: Genome-wide association studies (GWAS) are a useful tool for detecting disease- or quantitative-trait-related genetic variation in the veterinary field. For a binary trait, GWAS uses a case/control design. However, there is limited information on the optimal case/control ratio and sample size for GWAS. OBJECTIVES: This study aimed to assess the effects of case/control ratio and sample size on GWAS using computer simulation under certain assumptions. METHOD: Using the PLINK software, we simulated three different disease scenarios. In scenario 1, we simulated 10 case/control ratios with an increasing ratio of cases to controls. Scenario 2 was the reverse of scenario 1, with an increasing ratio of controls to cases. In scenarios 1 and 2, the sample size was gradually increased as the case/control ratio changed. In scenario 3, the total sample size was fixed at 2,000 to isolate the effect of the case/control ratio on the number of disease-related single nucleotide polymorphisms (SNPs) detected. RESULTS: The number of disease-related SNPs detected was highest when the case/control ratio was close to 1:1 in scenarios 1 and 2, and it did not change with an increase in sample size. Similarly, the number of disease-related SNPs was highest at a 1:1 case/control ratio in scenario 3, whereas unbalanced case/control ratios led to the detection of fewer disease-related SNPs. The estimated average power of the SNPs was highest at a 1:1 case/control ratio in all scenarios. CONCLUSIONS: These findings indicate that an increase in sample size may enhance the statistical power of GWAS when the number of cases is small, and that a 1:1 case/control ratio may be the optimal ratio for GWAS. These findings may be valuable not only for the veterinary field but also for human clinical studies.


Subjects
Genome-Wide Association Study; Polymorphism, Single Nucleotide; Humans; Animals; Genome-Wide Association Study/veterinary; Genome-Wide Association Study/methods; Computer Simulation; Sample Size; Phenotype
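To see why the 1:1 ratio is optimal at a fixed total sample size, here is an illustrative Python simulation of a single-SNP allelic chi-square test (the paper used PLINK's own simulation machinery; the allele frequency, odds ratio, and significance threshold below are our assumptions):

```python
# With total N fixed at 2000, estimate the power of a per-SNP allelic
# chi-square test across case/control splits.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def power(n_cases, n_controls, p_ctrl=0.3, odds_ratio=1.5,
          alpha=5e-8, reps=2000):
    odds = odds_ratio * p_ctrl / (1 - p_ctrl)
    p_case = odds / (1 + odds)              # risk-allele frequency in cases
    hits = 0
    for _ in range(reps):
        a_case = rng.binomial(2 * n_cases, p_case)     # risk alleles, cases
        a_ctrl = rng.binomial(2 * n_controls, p_ctrl)  # risk alleles, controls
        table = [[a_case, 2 * n_cases - a_case],
                 [a_ctrl, 2 * n_controls - a_ctrl]]
        if chi2_contingency(table)[1] < alpha:
            hits += 1
    return hits / reps

for n_cases in (200, 500, 1000, 1500, 1800):
    print(n_cases, 2000 - n_cases, power(n_cases, 2000 - n_cases))
```

Power peaks at the balanced 1000/1000 split and falls off symmetrically as the split becomes unbalanced, matching the abstract's conclusion.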
3.
Stat Med ; 43(10): 1973-1992, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634314

ABSTRACT

The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value might be reductive. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.


Subjects
Bayes Theorem; Humans; Probability; Sample Size
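A Monte Carlo sketch of the "random power" idea in a deliberately simpler setting (a one-sided z-test on a normal mean, not the exponential-family scale test of the paper):

```python
# The design prior induces a probability distribution on the power function;
# its mean is the usual probability of success, but other summaries
# (quantiles, exceedance probabilities) are available too.
import numpy as np
from scipy.stats import norm

alpha, n, sigma = 0.025, 50, 1.0
z_a = norm.ppf(1 - alpha)

def power(theta):                       # H0: theta <= 0 vs H1: theta > 0
    return 1 - norm.cdf(z_a - theta * np.sqrt(n) / sigma)

# Design prior on theta: the design-stage belief about the effect.
rng = np.random.default_rng(7)
theta_draws = rng.normal(loc=0.4, scale=0.15, size=100_000)
random_power = power(theta_draws)

print("expected power (probability of success):", random_power.mean())
print("P(power >= 0.8):", (random_power >= 0.8).mean())
print("median power:", np.median(random_power))
```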
4.
J Biopharm Stat ; : 1-20, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38639571

ABSTRACT

There are many Bayesian design methods allowing for the incorporation of historical data for sample size determination (SSD) in situations where the outcome in the historical data is the same as the outcome of a new study. However, there is a dearth of methods supporting the incorporation of data from a previously completed clinical trial that investigated the same or similar treatment as the new trial but had a primary outcome that is different. We propose a simulation-based Bayesian SSD framework using the partial-borrowing scale transformed power prior (straPP). The partial-borrowing straPP is developed by applying a novel scale transformation to a traditional power prior on the parameters from the historical data model to make the information better align with the new data model. The scale transformation is based on the assumption that the standardized parameters (i.e., parameters multiplied by the square roots of their respective Fisher information matrices) are equal. To illustrate the method, we present results from simulation studies that use real data from a previously completed clinical trial to design a new clinical trial with a primary time-to-event endpoint.
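For orientation, a sketch of the plain power prior that the straPP generalizes, in a conjugate normal model; the scale transformation via Fisher information that defines the straPP itself is not reproduced here, and all numbers are hypothetical:

```python
# Plain power prior in a conjugate normal model with known variance:
# raising the historical likelihood to a0 in [0, 1] is equivalent to
# shrinking the effective historical sample size to a0 * n0.
import numpy as np

def power_prior_posterior(y_new, y_hist, sigma2, a0, mu0=0.0, tau2=1e6):
    n, n0 = len(y_new), len(y_hist)
    prec = 1 / tau2 + a0 * n0 / sigma2 + n / sigma2
    mean = (mu0 / tau2 + a0 * n0 * np.mean(y_hist) / sigma2
            + n * np.mean(y_new) / sigma2) / prec
    return mean, 1 / prec     # posterior mean and variance of the mean

rng = np.random.default_rng(3)
hist = rng.normal(1.0, 2.0, 200)   # hypothetical historical trial
new = rng.normal(0.7, 2.0, 60)     # hypothetical new trial
for a0 in (0.0, 0.25, 1.0):        # no, partial, and full borrowing
    print(a0, power_prior_posterior(new, hist, 4.0, a0))
```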

5.
J Biopharm Stat ; : 1-20, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615361

ABSTRACT

Indirect mechanisms of cancer immunotherapies result in delayed treatment effects that vary among patients. Consequently, the use of the log-rank test in trial design and analysis can lead to significant power loss and pose additional challenges for interim decisions in adaptive designs. In this paper, we describe patients' survival using a piecewise proportional hazard model with random lag time and propose an adaptive promising zone design for cancer immunotherapy with heterogeneous delayed effects. We provide solutions for calculating conditional power and adjusting the critical value for the log-rank test with interim data. We divide the sample space into three zones (unfavourable, promising, and favourable) based on re-estimations of the survival parameters, the log-rank test statistic at the interim analysis, and the initial and maximum sample sizes. If the interim results fall into the promising zone, the sample size is increased; otherwise, it remains unchanged. We show through simulations that our proposed approach has greater overall power than the fixed sample design and similar power to the matched group sequential trial. Furthermore, we confirm that critical value adjustment effectively controls the type I error rate inflation. Finally, we provide recommendations on the implementation of our proposed method in cancer immunotherapy trials.
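A sketch of the standard conditional-power calculation on the Brownian-motion scale, which is the usual starting point for promising-zone decisions (this is the generic formula, not the paper's piecewise-hazard, random-lag-time extension):

```python
# Conditional power for a group-sequential log-rank test. t is the
# information fraction at the interim; z1 is the interim z-score.
from math import sqrt
from scipy.stats import norm

def conditional_power(z1, t, alpha_one_sided=0.025, drift=None):
    z_a = norm.ppf(1 - alpha_one_sided)
    if drift is None:                  # "current trend" drift estimate
        drift = z1 / sqrt(t)
    num = z_a - z1 * sqrt(t) - drift * (1 - t)
    return 1 - norm.cdf(num / sqrt(1 - t))

print(conditional_power(z1=1.5, t=0.5))   # interim halfway, z = 1.5
```

A promising-zone rule then compares this conditional power against thresholds to classify the interim result as unfavourable, promising, or favourable.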

6.
Clin Trials ; : 17407745241240401, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38618916

ABSTRACT

In the last few years, numerous novel designs have been proposed to improve the efficiency and accuracy of phase I trials in identifying the maximum-tolerated dose (MTD) or the optimal biological dose (OBD) for noncytotoxic agents. However, the conventional 3+3 approach, known for its simplicity but poor performance, continues to be an attractive choice for many trials despite these alternatives. This article seeks to underscore the importance of moving beyond the 3+3 design by highlighting a key element of trial design: the estimation of sample size and its crucial role in predicting toxicity and determining the MTD. We use simulation studies to compare the performance of the most widely used phase I approaches, the 3+3, Continual Reassessment Method (CRM), Keyboard, and Bayesian Optimal Interval (BOIN) designs, with respect to three key operating characteristics: the percentage of correct selection of the true MTD, the average number of patients allocated per dose level, and the average total sample size. The simulation results consistently show that the 3+3 algorithm underperforms model-based and model-assisted designs across all scenarios and metrics. The 3+3 method yields significantly lower (up to three times) probabilities of identifying the correct MTD, often selecting doses one or even two levels below the actual MTD. The 3+3 design allocates significantly fewer patients at the true MTD, assigns higher numbers to lower dose levels, and rarely explores doses above the target dose-limiting toxicity (DLT) rate. The overall performance of the 3+3 method is suboptimal, with a high level of unexplained uncertainty and significant implications for accurately determining the MTD. While the primary focus of the article is to demonstrate the limitations of the 3+3 algorithm, the question remains as to the preferred alternative. The intention is not to definitively recommend one model-based or model-assisted method over the others, as their performance can vary with parameters and model specifications. However, the presented results indicate that the CRM, Keyboard, and BOIN designs consistently outperform the 3+3 and offer improved efficiency and precision in determining the MTD, which is crucial in early-phase clinical trials.
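As a flavour of such simulations, here is one common variant of the 3+3 algorithm run against an assumed dose-toxicity curve; the CRM, Keyboard, and BOIN arms of a comparison would come from their own implementations and are not reproduced here:

```python
# One common 3+3 variant, simulated against true DLT probabilities.
# Returns the selected MTD index, or -1 if even the lowest dose is too toxic.
import numpy as np

def run_3plus3(true_dlt, rng):
    d, n_doses = 0, len(true_dlt)
    while True:
        dlt = rng.binomial(3, true_dlt[d])          # first cohort of 3
        if dlt == 1:
            dlt += rng.binomial(3, true_dlt[d])     # expand to 6
            if dlt <= 1:                            # <=1/6: escalate
                if d == n_doses - 1:
                    return d
                d += 1
                continue
            return d - 1                            # >=2/6: MTD is dose below
        if dlt == 0:                                # 0/3: escalate
            if d == n_doses - 1:
                return d
            d += 1
        else:                                       # >=2/3: MTD is dose below
            return d - 1

rng = np.random.default_rng(11)
true_dlt = [0.05, 0.10, 0.25, 0.40, 0.55]           # assumed; true MTD index 2
picks = [run_3plus3(true_dlt, rng) for _ in range(10_000)]
for d in range(-1, 5):
    print(d, picks.count(d) / len(picks))
```

Running this shows the pattern the abstract describes: the 3+3 frequently lands one level below the dose with DLT rate closest to the conventional 25-33% target.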

7.
J Biopharm Stat ; : 1-15, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38619921

ABSTRACT

Single-arm phase II trials are very common in oncology. A fixed-sample trial may lack sufficient power if the true efficacy is less than assumed. Adaptive designs have been proposed in the literature. We propose an adaptive sequential design based on Simon's design, the most widely used fixed-sample design for single-arm phase II oncology trials. A prominent feature of Simon's design is that it minimizes the sample size when there is no clinically meaningful efficacy. We identify Simon's trial as a special case of a group sequential design, so established methods for sample size re-estimation (SSR) can be readily applied to it. Simulations show that simply adding SSR to Simon's design may still not provide the desired power. We therefore propose expansions of Simon's design; the expanded design with SSR can provide even greater power.
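For reference, the operating characteristics of a given Simon two-stage design are simple binomial calculations; the sketch below checks a textbook design (r1/n1 = 1/10, r/n = 5/29, commonly tabulated for p0 = 0.10 vs p1 = 0.30):

```python
# Operating characteristics of a Simon two-stage design (r1/n1, r/n):
# probability of early termination (PET), expected sample size, and
# probability of rejecting H0, all at response rate p.
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    pet = binom.cdf(r1, n1, p)                 # stop after stage 1
    en = n1 + (1 - pet) * (n - n1)
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))
    return pet, en, reject

for p in (0.10, 0.30):                         # under H0 and under H1
    print(p, simon_oc(p, r1=1, n1=10, r=5, n=29))
```

Under p0 = 0.10 this reproduces the familiar PET of about 0.74 and expected sample size of about 15, which is exactly the "minimize N under no efficacy" property the abstract highlights.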

8.
Article in English | MEDLINE | ID: mdl-38621456

ABSTRACT

OBJECTIVE: To conceptualise a composite primary endpoint for parallel-group RCTs of exercise-based cardiac rehabilitation interventions, and to explore its application and statistical efficiency. DESIGN: We conducted a statistical exploration of sample size requirements. We combined exercise capacity and physical activity for the composite endpoint, both being directly related to reduced premature mortality in cardiac patients. Based on smallest detectable and minimal clinically important changes (change in exercise capacity of 15 W and change in physical activity of 10 min/day), the composite endpoint combines two dichotomous endpoints (achieved/not achieved). To examine statistical efficiency, we compared sample size requirements based on the composite endpoint with those based on the single endpoints, using data from two completed cardiac rehabilitation trials. SETTING: Cardiac rehabilitation phase III. PARTICIPANTS: Cardiac rehabilitation patients. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURE(S): Exercise capacity (Pmax assessed by incremental cycle ergometry) and physical activity (daily minutes of moderate-to-vigorous physical activity assessed by accelerometry). RESULTS: Expecting, for example, a 10% between-group difference and improvement in the clinical outcome, the composite endpoint would require an increase in sample size of up to 21% or 61%, depending on the dataset. When expecting a 10% difference and designing an intervention with the aim of non-deterioration, the composite endpoint would allow the sample size to be reduced by up to 55% or 70%. CONCLUSIONS: Trialists may consider the utility of the composite endpoint for future studies in exercise-based cardiac rehabilitation, where it could reduce sample size requirements. However, perhaps surprisingly at first, the composite endpoint could also increase the required sample size, depending on the observed baseline proportions in the trial population and the aim of the intervention.
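Since the composite is dichotomous (achieved/not achieved), the sample size comparisons reduce to standard two-proportion calculations; a sketch with illustrative proportions (not the trial datasets used in the paper):

```python
# Per-group sample size for comparing two proportions with the standard
# normal-approximation formula.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * sqrt(2 * pbar * (1 - pbar))
           + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g. a 10-percentage-point difference in the rate of achieving the
# composite endpoint (50% vs 60% are assumed, purely for illustration):
print(n_per_group(0.50, 0.60))
```

How favourable the composite is relative to a single endpoint then depends entirely on the baseline proportions it induces, which is the dependence the conclusions point to.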

9.
Open Life Sci ; 19(1): 20220866, 2024.
Article in English | MEDLINE | ID: mdl-38633413

ABSTRACT

We recruited four aquaporin-4 seropositive optic neuritis patients (five eyes) who received glucocorticoid treatment and underwent optical coherence tomography examination. Baseline medians of the macular ganglion cell layer plus inner plexiform layer (mGCIPL) thickness and volume for the eye of interest were 79.67 µm (73.664 ± 18.497 µm) and 0.58 mm3 (0.534 ± 0.134 mm3), respectively. At 2 months, the medians of the mGCIPL thickness and volume were 60.00 µm (51.576 ± 12.611 µm) and 0.44 mm3 (0.376 ± 0.091 mm3), respectively. At 6 months, they were 59.55 µm (46.288 ± 11.876 µm) and 0.44 mm3 (0.336 ± 0.084 mm3), respectively. Sample size estimates were obtained using two methods based on the mGCIPL thickness and volume data, with five effect sizes considered. The estimate based on the mGCIPL volume showed that 206 patients would be needed at the 6-month follow-up for 80% power at a 20% effect size. In conclusion, this study detected retinal damage in aquaporin-4 seropositive optic neuritis patients by optical coherence tomography and estimated the sample size for two-sample parallel-design clinical trials using two methods.
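For illustration, a normal-approximation two-sample calculation on the mGCIPL volume scale; the inputs are read off the reported baseline summaries and will not reproduce the paper's estimate of 206, which was based on follow-up change data and specific method choices:

```python
# Per-group sample size for a two-sample parallel design with a
# continuous endpoint (normal approximation).
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil(2 * (sd * (za + zb) / delta) ** 2)

delta = 0.20 * 0.534              # 20% effect on the baseline mean (mm^3)
print(n_per_group(delta, sd=0.134))
```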

10.
Cureus ; 16(3): e56418, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38638715

ABSTRACT

Background: Organ and body development varies greatly from year to year in pediatric patients; therefore, the incidence of each adverse event following phenobarbital (PB) administration would also vary with age. However, in clinical trials, increasing the sample size of pediatric patients in each age group has been challenging, so previous studies divided pediatric patients into three or four age groups based on developmental stage. Although these results were useful in clinical settings, information on adverse events at one-year age increments could further enhance treatment and care. Objectives: This study investigated, in one-year age increments, the tendency of each adverse event to occur following PB administration in pediatric patients. Methods: This study used data obtained from the U.S. Food and Drug Administration Adverse Event Reporting System (FAERS). Two inclusion criteria were set: (1) treatment with PB between January 2004 and June 2023, and (2) age 0-15 years. Using the cutoff value obtained from the Wilcoxon-Mann-Whitney test by the minimum p-value approach, this study explored changes in the occurrence tendency of each adverse event in one-year age increments. At a minimum p-value of <0.05, the age corresponding to this p-value was determined as the cutoff value; at a minimum p-value of ≥0.05, the cutoff value was considered nonexistent. Results: This study investigated all types of adverse events and explored the cutoff value for each. We identified 34, 16, 15, nine, five, five, eight, three, and eight types of adverse events for the cutoff values of ≤3/>3, ≤4/>4, ≤5/>5, ≤6/>6, ≤7/>7, ≤8/>8, ≤9/>9, ≤10/>10, and ≤11/>11 years, respectively. Conclusions: This study demonstrated that the adverse events requiring attention in pediatric patients vary with age. The findings can help improve treatment and care in pediatric clinical settings.
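A generic sketch of a minimum-p-value cutoff search on synthetic data (the paper used the Wilcoxon-Mann-Whitney test; a chi-square test per candidate cutoff is substituted here for simplicity):

```python
# Scan candidate age cutoffs (<=c vs >c), test whether the adverse-event
# rate differs between the two age groups, and keep the cutoff with the
# smallest p-value; declare it only if that p-value is < 0.05.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(5)
ages = rng.integers(0, 16, size=5000)                       # 0-15 years
event = rng.random(5000) < np.where(ages <= 5, 0.08, 0.03)  # synthetic AE

best_p, best_cut = 1.1, None
for cut in range(0, 15):
    young = ages <= cut
    table = [[np.sum(event & young), np.sum(~event & young)],
             [np.sum(event & ~young), np.sum(~event & ~young)]]
    p = chi2_contingency(table)[1]
    if p < best_p:
        best_p, best_cut = p, cut

print("cutoff:", best_cut, "min p:", best_p,
      "(declared only if p < 0.05)")
```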

11.
Biom J ; 66(3): e2300240, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637304

ABSTRACT

Rank methods are well-established tools for comparing two or multiple (independent) groups, but statistical planning methods for computing the required sample size(s) to detect a specific alternative with predefined power are lacking. In the present paper, we develop numerical algorithms for sample size planning of pseudo-rank-based multiple contrast tests. We discuss the treatment effects and different ways to approximate variance parameters within the estimation scheme, and we compare pairwise with global rank methods in detail. Extensive simulation studies show that the sample size estimators are accurate. A real data example illustrates the application of the methods.


Subjects
Algorithms; Models, Statistical; Sample Size; Computer Simulation
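Sample size planning for rank-based tests is often done by brute-force simulation; a much-simplified stand-in for the paper's pseudo-rank contrast machinery, using a plain two-group Wilcoxon rank-sum test with assumed distributions:

```python
# Simulation-based sample size search: increase n until the estimated
# power of the rank-sum test reaches the target.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

def sim_power(n, shift=0.5, reps=1000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(shift, 1.0, n)      # assumed location alternative
        if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / reps

n = 10
while sim_power(n) < 0.80:                 # smallest n per group, 80% power
    n += 5
print("approx. n per group:", n)
```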
12.
Heliyon ; 10(6): e26897, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38533019

ABSTRACT

In the real world, there are various situations in which not all units of the respondent population are accessible, known as unit non-response. Unit non-response complicates the estimation of population totals. The present work addresses interest in subpopulations (domains) in two cases: (i) when domain totals of the supporting auxiliary information are accessible and (ii) when they are not. Governments need reliable estimates to deliver services to these small domains. The auxiliary information is used to estimate the missing non-respondent information and to apply it to the desired domains. The accessible auxiliary variable for the domains may be positively skewed, so an appropriate model allowing for positive skewness is developed. The present work proposes an indirect method using power-based estimation with a calibration approach. By combining power-based estimation with the calibration technique, more accurate estimates can be obtained for the intended small domains, even when the auxiliary information is positively skewed. This approach helps mitigate the effect of non-response and improves the overall reliability of the estimators. Simulations were conducted for sample sizes of 70 and 90 with non-response in the study variable. The results show that the investigated power-based estimator provides a better option than the relevant exponential, ratio, and generalized regression estimators for the intended domains.

13.
Stat Med ; 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38545849

ABSTRACT

This study gives a systematic account of sample size adaptation designs (SSADs) and provides direct proof, from a different perspective, of the efficiency advantage of general SSADs over group sequential designs (GSDs). For this purpose, a class of sample size mapping functions defining SSADs is introduced. Under the two-stage adaptive clinical trial setting, theorems are developed that describe the properties of SSADs. Sufficient conditions are derived and used to prove analytically that SSADs based on the weighted combination test can be uniformly more efficient than GSDs over a range of likely values of the true treatment difference δ. As shown in various scenarios, given a GSD, a fully adaptive SSAD can be obtained that has statistical power similar to that of the GSD but a smaller average sample size for all δ in the range. The associated sample size savings can be substantial. A practical design example and suggestions on the steps to find efficient SSADs are also provided.
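The weighted combination test at the heart of such SSADs fixes the stage weights at design time, which is what keeps the type I error intact when the stage 2 sample size is re-estimated; a minimal sketch:

```python
# Weighted inverse-normal combination of independent stage-wise z-scores.
# Weights come from the *planned* stage sizes, so the statistic stays
# N(0,1) under H0 regardless of the re-estimated stage 2 sample size.
from math import sqrt
from scipy.stats import norm

def weighted_combination_z(z1, z2, n1_planned, n2_planned):
    w1 = sqrt(n1_planned / (n1_planned + n2_planned))
    w2 = sqrt(n2_planned / (n1_planned + n2_planned))
    return w1 * z1 + w2 * z2

z = weighted_combination_z(z1=1.2, z2=2.0, n1_planned=100, n2_planned=100)
print(z, z > norm.ppf(1 - 0.025))   # one-sided test at level 2.5%
```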

14.
J Biopharm Stat ; : 1-14, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515269

ABSTRACT

In recent years, clinical trials utilizing a two-stage seamless adaptive trial design have become very popular in drug development. A typical example is a phase 2/3 adaptive trial design, which consists of two stages: stage 1 is a phase 2 dose-finding study and stage 2 is a phase 3 efficacy confirmation study. Depending upon whether the target patient population, study objectives, and study endpoints are the same at the different stages, Chow (2020) classified two-stage seamless adaptive designs into eight categories. In practice, standard statistical methods for a group sequential design with one planned interim analysis are often wrongly applied directly for data analysis. In this article, following ideas proposed by Chow and Lin (2015) and Chow (2020), a statistical method for the analysis of a two-stage seamless adaptive trial design with different study endpoints and a shifted target patient population is discussed, under the fundamental assumption that the study endpoints have a known relationship. The proposed analysis method should be useful both in clinical trials with protocol amendments and in clinical trials with disease progression, utilizing a two-stage seamless adaptive trial design.

15.
Pharm Stat ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509020

ABSTRACT

In randomised controlled trials, the outcome of interest can be recurrent events, such as hospitalisations for heart failure. If mortality rates are non-negligible, both recurrent events and competing terminal events need to be addressed when formulating the estimand, and statistical analysis is no longer trivial. In order to design future trials with primary recurrent event endpoints subject to competing risks, it is necessary to be able to perform power calculations to determine sample sizes. This paper introduces a simulation-based approach for power estimation based on a proportional means model for recurrent events and a proportional hazards model for terminal events. The simulation procedure is presented along with a discussion of what the user needs to specify to use the approach. The method is flexible and based on marginal quantities that are easy to specify. However, the method cannot reproduce a certain type of dependence between the recurrent and terminal event processes; a sensitivity analysis exploring this suggests that the power is nevertheless robust. Data from a randomised controlled trial, LEADER, are used as the basis for generating data for a future trial. Finally, potential power gains of recurrent event methods over first-event methods are discussed.
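A stripped-down sketch of the simulation idea (homogeneous Poisson recurrences, exponential terminal events, and a crude rate-ratio test in place of the proportional means analysis; all rates are assumptions):

```python
# Simulation-based power with recurrent events subject to a terminal
# event: follow-up ends at death or administrative censoring.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def sim_trial(n_per_arm, rate_c=1.0, rate_ratio=0.8,
              death_rate=0.1, cens_time=2.0):
    events, exposure = [0, 0], [0.0, 0.0]
    for arm, rate in enumerate((rate_c, rate_c * rate_ratio)):
        death = rng.exponential(1 / death_rate, n_per_arm)
        follow = np.minimum(death, cens_time)      # terminal event or censor
        events[arm] = rng.poisson(rate * follow).sum()
        exposure[arm] = follow.sum()
    log_rr = np.log((events[1] / exposure[1]) / (events[0] / exposure[0]))
    se = np.sqrt(1 / events[0] + 1 / events[1])
    return abs(log_rr / se) > norm.ppf(0.975)      # two-sided 5% test

power = np.mean([sim_trial(300) for _ in range(1000)])
print("estimated power:", power)
```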

16.
Heliyon ; 10(5): e27013, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38455536

ABSTRACT

Randomized selection trials are frequently used to compare experimental treatments that have the potential to be beneficial, but they often do not include a control group. While time-to-event endpoints are commonly applied in clinical investigations, methodologies for determining the required sample size for such endpoints are lacking, except under the exponential distribution. Recently, there has been a shift in clinical trials toward progression-free survival as a primary endpoint. However, the use of this measure has typically been restricted to specific time points for both sample size determination and analysis, which could substantially influence the trial and diminish the capacity to discern differences between treatment groups. In calculating sample sizes for randomized selection trials, this investigation assumes that the time-to-event endpoint follows an exponential, Weibull, or generalized exponential distribution.
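The selection-design logic lends itself to simulation; a sketch estimating the probability of correctly selecting the better arm under Weibull outcomes, ignoring censoring and using illustrative parameters:

```python
# Probability of correct selection in a two-arm randomised selection
# trial: pick the arm with the larger observed median survival time.
import numpy as np

rng = np.random.default_rng(4)

def p_correct_selection(n_per_arm, shape=1.2, scale_a=1.0, scale_b=1.3,
                        reps=5000):
    correct = 0
    for _ in range(reps):
        t_a = scale_a * rng.weibull(shape, n_per_arm)
        t_b = scale_b * rng.weibull(shape, n_per_arm)  # truly better arm
        correct += np.median(t_b) > np.median(t_a)
    return correct / reps

# Find how the selection probability grows with the per-arm sample size:
for n in (20, 40, 60, 80):
    print(n, p_correct_selection(n))
```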

17.
J Biopharm Stat ; : 1-19, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38549502

ABSTRACT

The 2-in-1 design is becoming popular in oncology drug development, with the flexibility to use different endpoints at different decision times. Based on the observed interim data, sponsors can choose to seamlessly advance a small phase 2 trial to a full-scale confirmatory phase 3 trial with a pre-determined maximum sample size, or to remain in the phase 2 trial. While this approach may increase efficiency in drug development, it is rigid and requires a pre-specified fixed sample size. In this paper, we propose a flexible 2-in-1 design with sample size adaptation, while retaining the advantage of allowing an intermediate endpoint for interim decision-making. The proposed design reflects the needs of the FDA's recent Project FrontRunner initiative, which encourages the use of an earlier surrogate endpoint to potentially support accelerated approval, with conversion to standard approval based on long-term endpoints from the same randomized study. Additionally, we identify the interim decision cut-off that allows a conventional test procedure at the final analysis. Extensive simulation studies show that the proposed design requires a much smaller sample size and a shorter timeline than the simple 2-in-1 design, while achieving similar power. We present a case study in multiple myeloma to demonstrate the benefits of the proposed design.

18.
Genes (Basel) ; 15(3)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38540403

ABSTRACT

The false discovery rate (FDR) is a widely used metric of statistical significance for genomic data analyses that involve multiple hypothesis testing. Power and sample size considerations are important in planning studies that perform these types of genomic data analyses. Here, we propose a three-rectangle approximation of a p-value histogram to derive a formula to compute the statistical power and sample size for analyses that involve the FDR. We also introduce the R package FDRsamplesize2, which incorporates these and other power calculation formulas to compute power for a broad variety of studies not covered by other FDR power calculation software. A few illustrative examples are provided. The FDRsamplesize2 package is available on CRAN.


Subjects
Algorithms; Software; Sample Size; Research Design; Genomics
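A sketch of the standard large-m power logic for FDR-controlled testing (solve for the fixed p-value threshold whose expected FDR hits the target, then report average power there); this illustrates the general calculation, not the three-rectangle approximation or the FDRsamplesize2 internals:

```python
# Average power under FDR control for m two-sided z-tests, a fraction
# pi1 of which are true signals with a common mean shift on the z scale.
from scipy.stats import norm
from scipy.optimize import brentq

def avg_power(alpha, shift):
    c = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(c - shift)) + norm.cdf(-c - shift)

def power_at_fdr(fdr, pi1, shift):
    pi0 = 1 - pi1
    # Fixed threshold a whose expected FDR equals the target:
    f = lambda a: pi0 * a / (pi0 * a + pi1 * avg_power(a, shift)) - fdr
    alpha_star = brentq(f, 1e-12, 0.5)
    return avg_power(alpha_star, shift)

# The shift grows with sqrt(n), so scanning shifts mimics scanning n:
for shift in (2.0, 3.0, 4.0, 5.0):
    print(shift, round(power_at_fdr(fdr=0.05, pi1=0.01, shift=shift), 3))
```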
19.
Sci Rep ; 14(1): 6865, 2024 03 22.
Article in English | MEDLINE | ID: mdl-38514864

ABSTRACT

Cronobacter sakazakii (Cz) infections linked with powdered milk/flour (PMF) have been on the increase in recent times. The current study aimed to assess the worldwide and regional prevalence of Cz in PMF. Cz-PMF-directed data were systematically mined from four mega-databases via a topic-field-driven PRISMA protocol without any restriction. Bivariate analysis of the datasets was conducted, and the data were then fitted to random-intercept logistic mixed-effects regressions with leave-one-study-out cross-validation (LOSOCV). Small-study effects were assayed via Egger's regression tests. Contributing factors to Cz contamination/detection in PMF were determined using 1000-permutation-bootstrapped meta-regressions. A total of 3761 records were found, of which 68 studies were included. Sample size showed considerable correlation with Cz positivity (r = 0.75, p = 2.5e-17), Milkprod2020 (r = 0.33, p = 1.82e-03), and SuDI (r = -0.30, p = 4.11e-03). The global prevalence of Cz in PMF was 8.39% (95% CI 6.06-11.51; PI 0.46-64.35), with a LOSOCV value of 7.66% (6.39-9.15; PI 3.10-17.70). Cz prevalence in PMF varied significantly (p < 0.05) with detection method and DNA extraction method, and across continents, WHO regions, and World Bank regions. Nation, detection method, World Bank region, WHO region, and sample size explained 53.88%, 19.62%, 19.03%, 15.63%, and 9.22% of the true differences in Cz prevalence in PMF, respectively. In conclusion, the results indicate that national commitment to the monitoring and surveillance of Cz in PMF, matched with adequate sample sizes and appropriate detection methods, will go a long way toward preventing Cz contamination and infections.


Subjects
Cronobacter sakazakii; Cronobacter; Animals; Cronobacter sakazakii/genetics; Infant Formula; Flour; Milk; Powders; Prevalence; Food Microbiology; Cronobacter/genetics
20.
Qual Life Res ; 33(5): 1241-1256, 2024 May.
Article in English | MEDLINE | ID: mdl-38427288

ABSTRACT

PURPOSE: Statistical power for response shift detection with structural equation modeling (SEM) is currently underreported. The present paper addresses this issue by providing worked-out examples and syntaxes of power calculations relevant to the statistical tests associated with the SEM approach for response shift detection. METHODS: Power calculations and related sample-size requirements are illustrated for two modelling goals: (1) to detect misspecification in the measurement model, and (2) to detect response shift. Power analyses for hypotheses regarding (exact) overall model fit and the presence of response shift are demonstrated step by step. The freely available and user-friendly R package lavaan and the shiny app 'power4SEM' are used for the calculations. RESULTS: Using the SF-36 as an example, we illustrate the specification of null-hypothesis (H0) and alternative-hypothesis (H1) models to calculate chi-square-based power for the test of overall model fit, the omnibus test of response shift, and the specific test of response shift. For example, we show that a sample size of 506 is needed to reject an incorrectly specified measurement model when the actual model has two medium-sized cross-loadings. We also illustrate power calculation based on the RMSEA index for approximate fit, where H0 and H1 are defined in terms of RMSEA values. CONCLUSION: By providing accessible resources to perform power analyses and emphasizing the different power analyses associated with different modeling goals, we hope to facilitate the uptake of power analyses for response shift detection with SEM and thereby enhance the stringency of response shift research.


Subjects
Latent Class Analysis; Humans; Models, Statistical; Sample Size; Quality of Life
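The RMSEA-based power calculation mentioned in the abstract follows the MacCallum-Browne-Sugawara noncentral chi-square approach, which is straightforward to reproduce; the df and RMSEA values below are illustrative:

```python
# Chi-square power for SEM overall fit based on RMSEA: test of close fit
# (H0: epsilon <= rmsea0) against an alternative epsilon = rmsea1.
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea0=0.05, rmsea1=0.08, alpha=0.05):
    ncp0 = (n - 1) * df * rmsea0 ** 2
    ncp1 = (n - 1) * df * rmsea1 ** 2
    crit = ncx2.ppf(1 - alpha, df, ncp0)   # critical value under H0
    return ncx2.sf(crit, df, ncp1)         # rejection probability under H1

for n in (100, 200, 400, 800):             # sample size needed for power
    print(n, round(rmsea_power(n, df=50), 3))
```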